

Uncertainty-aware Self-training for Few-shot Text Classification

Neural Information Processing Systems

The recent success of pre-trained language models crucially hinges on fine-tuning them on large amounts of labeled data for the downstream task, which are typically expensive to acquire or difficult to access for many applications. We study self-training, one of the earliest semi-supervised learning approaches, as a way to reduce the annotation bottleneck by making use of large-scale unlabeled data for the target task. The standard self-training mechanism randomly samples instances from the unlabeled pool to generate pseudo-labels and augment the labeled data. We propose an approach to improve self-training by incorporating uncertainty estimates of the underlying neural network, leveraging recent advances in Bayesian deep learning. Specifically, we propose (i) acquisition functions that select instances from the unlabeled pool using Monte Carlo (MC) Dropout, and (ii) a learning mechanism that leverages model confidence for self-training. As an application, we focus on text classification with five benchmark datasets. We show that our methods, leveraging only 20-30 labeled samples per class per task for training and validation, perform within 3% of fully supervised pre-trained language models fine-tuned on thousands of labels, with an aggregate accuracy of 91% and improvements of up to 12% over baselines.
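To make the MC Dropout acquisition idea concrete, here is a minimal numpy sketch, not the authors' implementation: a toy linear classifier is run T times with a fresh dropout mask each pass, the per-pass softmax outputs are averaged into a predictive distribution, and the spread across passes serves as an uncertainty score for selecting which unlabeled instances to pseudo-label. All shapes, the dropout rate, and the variance-based uncertainty measure are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Toy "network": a linear output layer whose inputs are randomly dropped on
# every forward pass, mimicking MC Dropout kept active at inference time.
W = rng.normal(size=(16, 3))  # hidden_dim=16, num_classes=3 (illustrative)

def stochastic_forward(h, p_drop=0.5):
    mask = rng.random(h.shape) > p_drop  # fresh dropout mask each call
    return softmax((h * mask / (1 - p_drop)) @ W)

def predictive_stats(h, T=30):
    """Average T stochastic passes; return mean class probs and an uncertainty score."""
    samples = np.stack([stochastic_forward(h) for _ in range(T)])  # (T, N, C)
    mean = samples.mean(axis=0)                 # predictive distribution
    uncertainty = samples.var(axis=0).sum(-1)   # summed per-class variance across passes
    return mean, uncertainty

# Acquisition over an unlabeled pool: pseudo-label from the predictive mean and
# prefer low-uncertainty (easy) instances for augmenting the labeled set.
H = rng.normal(size=(100, 16))  # stand-in for encoder outputs of 100 unlabeled texts
mean_probs, uncertainty = predictive_stats(H)
pseudo_labels = mean_probs.argmax(axis=1)
easiest = np.argsort(uncertainty)[:10]  # 10 most confident candidates
```

The same machinery supports the opposite acquisition strategy (sampling high-uncertainty instances for exploration); which direction helps depends on how noisy the resulting pseudo-labels are.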


Review for NeurIPS paper: Uncertainty-aware Self-training for Few-shot Text Classification

Neural Information Processing Systems

Weaknesses: My main concerns are with the experiments. While the authors make an effort to perform ablation analysis, I think some important ablations are still missing to convince me that such a BNN-powered self-training scheme is better than classic ST: (1) The proposed method always uses a smart sample-selection strategy, while the classic ST baseline in this paper either does not select samples or selects them uniformly. It is very common for classic ST to select samples based on confidence scores, which can be class-dependent as well. The comparison with classic ST therefore does not seem fair to me. I would like to see a comparison between UST without Conf and classic ST with confidence-based, class-dependent sample selection, or a variant of full UST with its sample selection replaced by confidence-score-based selection; otherwise I see no direct evidence that the BNN-powered "uncertainty-awareness" beats a simple confidence-score-based baseline.
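The baseline the reviewer asks for is easy to state precisely. A possible sketch (my reading of the review, not code from the paper): rank pseudo-labeled instances by the model's top softmax probability and keep the top-k per class, so that selection is both confidence-based and class-dependent. The Dirichlet-sampled confidences and k=10 are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in model confidences over a 200-instance unlabeled pool, 3 classes.
probs = rng.dirichlet(np.ones(3), size=200)
labels = probs.argmax(axis=1)   # pseudo-labels
conf = probs.max(axis=1)        # top softmax probability per instance

def class_balanced_topk(conf, labels, k=10):
    """For each class, keep the k most confident pseudo-labeled instances."""
    chosen = []
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        chosen.extend(idx[np.argsort(conf[idx])[::-1][:k]])
    return np.array(chosen)

selected = class_balanced_topk(conf, labels)
```

Comparing this selection rule against the MC Dropout acquisition, with everything else held fixed, would isolate the contribution of Bayesian uncertainty that the reviewer wants to see measured.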


Review for NeurIPS paper: Uncertainty-aware Self-training for Few-shot Text Classification

Neural Information Processing Systems

This work presents a novel approach of integrating uncertainty into self-training to obtain strong results on text classification with very few labels. The work compares against a strong set of baselines and has extensive ablations. The reviewers agreed the response answered most of their concerns. The work could be improved with more diverse low-resource setups and by improving the clarity of the writing.

